Collaborating Authors

Geoffrey Hinton


Practical Deep Learning with Bayesian Principles

Kazuki Osawa, Siddharth Swaroop, Mohammad Emtiyaz Khan, Anirudh Jain, Runa Eschenhagen, Richard E. Turner, Rio Yokota

Neural Information Processing Systems

Figure 2: momentum handling in Adam and its distributed calculation, with calibration results on ImageNet.







Appendix: Improving Contrastive Learning on Imbalanced Seed Data via Open-World Sampling

Neural Information Processing Systems

B) Details of the employed hyperparameters. For all fine-tuning, the optimizer is SGD with momentum of 0.9 and an initial learning rate of 30, following [ When fine-tuning for linear-separability performance, we train for 30 epochs and decrease the learning rate by a factor of 10 at epochs 10 and 20. The initial lr is set to 0.02, employing cosine learning rate decay without
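The two schedules above can be sketched as plain functions of the epoch. This is a minimal illustration only; the function names, the 30-epoch horizon for the cosine schedule, and the decay-to-zero floor are assumptions, since the snippet is truncated:

```python
import math

# Hedged sketch of the two learning-rate schedules described above;
# names and defaults are illustrative, not taken from the paper's code.

def step_lr(epoch, base_lr=30.0, milestones=(10, 20), gamma=0.1):
    """LR for the linear-separability fine-tuning: start at 30 and
    divide by 10 at each milestone epoch (10 and 20)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr

def cosine_lr(epoch, total_epochs, base_lr=0.02):
    """Cosine decay from base_lr toward 0 over total_epochs
    (assumed floor of 0, since the snippet cuts off)."""
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

For example, `step_lr(5)` stays at 30, `step_lr(10)` drops to 3, and `step_lr(20)` to 0.3, matching the "decrease by 10 times at epochs 10 and 20" protocol.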


Chatbots will be able to teach children TWICE as fast as teachers in the next 10 years, says the 'godfather of AI'

Daily Mail - Science & tech

Chatbots will be able to teach children more than twice as fast as teachers can within the next decade, the so-called godfather of AI has predicted. Geoffrey Hinton, who won a Nobel Prize for his work on the technology, also claimed AI personal tutors would 'be much more efficient and less boring'. Speaking at Gitex Europe, the British computer scientist said: 'It's not there yet, but it's coming, and so we'll get much better education at many levels.' AI personal tutors are already being trialled in UK schools, with the technology now able to talk directly to the student and adapt lesson plans to their knowledge level. The government has already funnelled millions of pounds into AI education initiatives – though it has claimed the technology will 'absolutely not' replace teachers.


'Godfather of AI' reveals the startling odds that artificial intelligence will take over humanity

Daily Mail - Science & tech

Scientist and physicist Geoffrey Hinton believes there could be a one in five chance that humanity will eventually be taken over by artificial intelligence. Hinton, a Nobel laureate in physics who's been dubbed the 'godfather of AI', made the startling prediction in an April 1 interview with CBS News that was aired on Saturday morning. 'I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess,' Hinton said. Besides his cost-cutting responsibilities in the federal government, Musk is the chief executive of xAI, the company that made the AI chatbot Grok. Musk has said AI will become smarter than the entire human race by 2029.


The machine learning victories at the 2024 Nobel Prize Awards and how to explain them

AIHub

Anna Demming reports on what the prizes were awarded for and how finding connections between the two approaches to machine learning may help explain how "black box" algorithms reach their conclusions. Few saw it coming when, on 8th October 2024, the Nobel Committee awarded the 2024 Nobel Prize for Physics to John Hopfield for his Hopfield networks and to Geoffrey Hinton for his Boltzmann machines, seminal developments towards machine learning with statistical physics at their heart. The next day, machine learning, albeit using a different architecture, bagged half of the Nobel Prize for Chemistry as well, with the award going to Demis Hassabis and John Jumper for the development of an algorithm that predicts protein-folding conformations. The other half of the Chemistry Nobel was awarded to David Baker for successfully building new proteins. While the AI takeover at this year's Nobel announcements for Physics and Chemistry came as a surprise to most, there has been keen interest in how these apparently different approaches to machine learning might actually reduce to the same thing, revealing new ways of extracting some fundamental explainability from the generative AI algorithms that have so far been considered effectively "black boxes".